The below command can be used to connect to an OCNE 2.x OCK instance from the workstation CLI node.
ocne cluster console --node <node> --direct
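For example, with a hypothetical node named ocne-control-plane-1 (substitute an actual node name, which can be listed with kubectl get nodes):
ocne cluster console --node ocne-control-plane-1 --direct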
Keywords
connect connecting ssh sshing nodes console consoles how to howto doc docs
Below are the crictl and podman commands to pull the OCK image for an OCNE 2.x release. Change the OCK image tag in the commands as needed.
crictl pull container-registry.oracle.com/olcne/ock:1.32
podman pull container-registry.oracle.com/olcne/ock:1.32
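Once pulled, the image can be verified with the matching list command, for example:
crictl images | grep ock
podman images container-registry.oracle.com/olcne/ock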
Keywords:
OCNE Oracle Cloud Native Environment olcne CNE OCK Container Engine Kubernetes command commands ocne pulling
Below are the steps to install Oracle Cloud Native Environment (OCNE) 1.9 non-HA on OCI (Oracle Cloud Infrastructure).
# ssh-keygen -t rsa
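The generated public key must be distributed to each node so that the operator node has passwordless SSH access. A minimal sketch, assuming the opc user and the node names used in the provision step below (on OCI, where password login is disabled, the key can instead be appended to ~/.ssh/authorized_keys on each node):
ssh-copy-id opc@rhck-ctrl
ssh-copy-id opc@rhck-wrkr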
# sudo dnf -y install oracle-olcne-release-el8
Last metadata expiration check: 1:27:12 ago on Wed 26 Feb 2025 03:41:24 AM GMT.
Dependencies resolved.
===========================================================================================
 Package                   Architecture   Version      Repository          Size
===========================================================================================
Installing:
 oracle-ocne-release-el8   x86_64         1.0-12.el8   ol8_baseos_latest   16 k

Transaction Summary
===========================================================================================
Install  1 Package

Total download size: 16 k
Installed size: 20 k
Downloading Packages:
oracle-ocne-release-el8-1.0-12.el8.x86_64.rpm        214 kB/s |  16 kB     00:00
-------------------------------------------------------------------------------------------
Total                                                209 kB/s |  16 kB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                           1/1
  Installing       : oracle-ocne-release-el8-1.0-12.el8.x86_64                 1/1
  Running scriptlet: oracle-ocne-release-el8-1.0-12.el8.x86_64                 1/1
  Verifying        : oracle-ocne-release-el8-1.0-12.el8.x86_64                 1/1

Installed:
  oracle-ocne-release-el8-1.0-12.el8.x86_64

Complete!
# sudo sed -i 's/ol8_developer_olcne/ol8_developer/g' /etc/yum.repos.d/oracle-ocne-ol8.repo
# sudo dnf config-manager --enable ol8_olcne19 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream ol8_UEKR7
# sudo dnf config-manager --disable ol8_olcne18 ol8_olcne17 ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12 ol8_UEKR6
# sudo dnf repolist enabled
Repository ol8_developer is listed more than once in the configuration
repo id                          repo name
ol8_MySQL84                      MySQL 8.4 Server Community for Oracle Linux 8 (x86_64)
ol8_MySQL84_tools_community      MySQL 8.4 Tools Community for Oracle Linux 8 (x86_64)
ol8_MySQL_connectors_community   MySQL Connectors Community for Oracle Linux 8 (x86_64)
ol8_UEKR7                        Latest Unbreakable Enterprise Kernel Release 7 for Oracle Linux 8 (x86_64)
ol8_addons                       Oracle Linux 8 Addons (x86_64)
ol8_appstream                    Oracle Linux 8 Application Stream (x86_64)
ol8_baseos_latest                Oracle Linux 8 BaseOS Latest (x86_64)
ol8_ksplice                      Ksplice for Oracle Linux 8 (x86_64)
ol8_kvm_appstream                Oracle Linux 8 KVM Application Stream (x86_64)
ol8_oci_included                 Oracle Software for OCI users on Oracle Linux 8 (x86_64)
ol8_olcne19                      Oracle Cloud Native Environment version 1.9 (x86_64)
# sudo dnf -y install olcnectl
Repository ol8_developer is listed more than once in the configuration
Oracle Linux 8 BaseOS Latest (x86_64)                      215 kB/s | 4.3 kB  00:00
Oracle Linux 8 Application Stream (x86_64)                 379 kB/s | 4.5 kB  00:00
Oracle Linux 8 Addons (x86_64)                             286 kB/s | 3.5 kB  00:00
Oracle Cloud Native Environment version 1.9 (x86_64)       736 kB/s |  89 kB  00:00
Latest Unbreakable Enterprise Kernel Release 7 for Oracle  269 kB/s | 3.5 kB  00:00
Oracle Linux 8 KVM Application Stream (x86_64)             8.4 MB/s | 1.6 MB  00:00
Dependencies resolved.
===========================================================================================
 Package     Architecture   Version       Repository    Size
===========================================================================================
Installing:
 olcnectl    x86_64         1.9.2-3.el8   ol8_olcne19   4.8 M

Transaction Summary
===========================================================================================
Install  1 Package

Total download size: 4.8 M
Installed size: 15 M
Downloading Packages:
olcnectl-1.9.2-3.el8.x86_64.rpm                             21 MB/s | 4.8 MB  00:00
-------------------------------------------------------------------------------------------
Total                                                       20 MB/s | 4.8 MB  00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing    :                                        1/1
  Installing   : olcnectl-1.9.2-3.el8.x86_64            1/1
  Verifying    : olcnectl-1.9.2-3.el8.x86_64            1/1

Installed:
  olcnectl-1.9.2-3.el8.x86_64

Complete!
[opc@rhck-opr yum.repos.d]$
# olcnectl provision \
  --api-server rhck-opr \
  --control-plane-nodes rhck-ctrl \
  --worker-nodes rhck-wrkr \
  --environment-name cne-rhck-env \
  --name cne-rhck-nonha-cluster \
  --yes
# olcnectl provision \
> --api-server rhck-opr \
> --control-plane-nodes rhck-ctrl \
> --worker-nodes rhck-wrkr \
> --environment-name cne-rhck-env \
> --name cne-rhck-nonha-cluster \
> --yes
INFO[26/02/25 05:34:51] Generating certificate authority
INFO[26/02/25 05:34:51] Generating certificate for rhck-opr
INFO[26/02/25 05:34:51] Generating certificate for rhck-ctrl
INFO[26/02/25 05:34:52] Generating certificate for rhck-wrkr
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-opr
INFO[26/02/25 05:34:52] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-opr
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-opr/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-opr
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-opr/node.key" to "/etc/olcne/certificates/node.key" on rhck-opr
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-ctrl
INFO[26/02/25 05:34:52] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-ctrl
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-ctrl/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-ctrl
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-ctrl/node.key" to "/etc/olcne/certificates/node.key" on rhck-ctrl
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-wrkr
INFO[26/02/25 05:34:53] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-wrkr
INFO[26/02/25 05:34:53] Copying local file at "certificates/rhck-wrkr/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-wrkr
INFO[26/02/25 05:34:53] Copying local file at "certificates/rhck-wrkr/node.key" to "/etc/olcne/certificates/node.key" on rhck-wrkr
INFO[26/02/25 05:34:53] Apply api-server configuration on rhck-opr:
* Install oracle-olcne-release
* Enable olcne19 repo
* Install API Server
* Add firewall port 8091/tcp
INFO[26/02/25 05:34:53] Apply control-plane configuration on rhck-ctrl:
* Install oracle-olcne-release
* Enable olcne19 repo
* Configure firewall rule:
  Add interface cni0 to trusted zone
  Add ports: 8090/tcp 10250/tcp 10255/tcp 9100/tcp 8472/udp 6443/tcp
* Disable swap
* Load br_netfilter module
* Load Bridge Tunable Parameters:
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  net.ipv4.ip_forward = 1
* Set SELinux to permissive
* Install and enable olcne-agent
INFO[26/02/25 05:34:53] Apply worker configuration on rhck-wrkr:
* Install oracle-olcne-release
* Enable olcne19 repo
* Configure firewall rule:
  Add interface cni0 to trusted zone
  Add ports: 8090/tcp 10250/tcp 10255/tcp 9100/tcp 8472/udp
* Disable swap
* Load br_netfilter module
* Load Bridge Tunable Parameters:
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  net.ipv4.ip_forward = 1
* Set SELinux to permissive
* Install and enable olcne-agent
Environment cne-rhck-env created.
Modules created successfully.
Modules installed successfully.
INFO[26/02/25 05:49:19] Kubeconfig for instance "cne-rhck-nonha-cluster" in environment "cne-rhck-env" written to kubeconfig.cne-rhck-env.cne-rhck-nonha-cluster
olcnectl module instances \
  --api-server rhck-opr:8091 \
  --environment-name cne-rhck-env \
  --update-config
olcnectl module instances --environment-name cne-rhck-env
# olcnectl module instances --environment-name cne-rhck-env
INSTANCE                 MODULE       STATE
rhck-ctrl:8090           node         installed
rhck-wrkr:8090           node         installed
cne-rhck-nonha-cluster   kubernetes   installed
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
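Alternatively, on the operator node, the kubeconfig file written at the end of the provision step can be used directly (file name taken from the provision output above; the path is relative to the directory where olcnectl provision was run):
export KUBECONFIG=$PWD/kubeconfig.cne-rhck-env.cne-rhck-nonha-cluster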
kubectl get nodes
kubectl get pods -A
# kubectl get nodes
NAME        STATUS   ROLES           AGE   VERSION
rhck-ctrl   Ready    control-plane   11m   v1.29.9+3.el8
rhck-wrkr   Ready    <none>          10m   v1.29.9+3.el8
# kubectl get pods -A
NAMESPACE              NAME                                          READY   STATUS    RESTARTS   AGE
kube-system            coredns-5859f68d4-2z6vq                       1/1     Running   0          11m
kube-system            coredns-5859f68d4-lqxxk                       1/1     Running   0          11m
kube-system            etcd-rhck-ctrl                                1/1     Running   0          11m
kube-system            kube-apiserver-rhck-ctrl                      1/1     Running   0          11m
kube-system            kube-controller-manager-rhck-ctrl             1/1     Running   0          11m
kube-system            kube-flannel-ds-gz548                         1/1     Running   0          8m49s
kube-system            kube-flannel-ds-rmpdt                         1/1     Running   0          8m49s
kube-system            kube-proxy-ffnzs                              1/1     Running   0          10m
kube-system            kube-proxy-n7kxf                              1/1     Running   0          11m
kube-system            kube-scheduler-rhck-ctrl                      1/1     Running   0          11m
kubernetes-dashboard   kubernetes-dashboard-547d4b479c-fnjtf         1/1     Running   0          8m48s
ocne-modules           verrazzano-module-operator-6cb74478bf-xv8z2   1/1     Running   0          8m48s
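Optionally, the deployment can also be validated from the operator node with olcnectl module report (a sketch; the environment and module names match the provision step above):
olcnectl module report \
  --environment-name cne-rhck-env \
  --name cne-rhck-nonha-cluster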
OCNE backups can be taken using the below command:
olcnectl module backup \
--environment-name myenvironment \
--name mycluster
The below documentation has more information on this:
https://docs.oracle.com/en/operating-systems/olcne/1.6/orchestration/backup-restore.html#backup
OCNE backups are stored under /var/olcne/backups/<environment-name>/kubernetes/<module-name>/<timestamp> on the operator node.
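For example, to list the backups taken for the environment and cluster names used above (adjust the names to your environment):
sudo ls -l /var/olcne/backups/myenvironment/kubernetes/mycluster/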
Below is the procedure to add a new control plane node to an existing OCNE cluster.
1. List the current OCNE nodes using the olcnectl command on the operator node as follows. In the below command, change the environment name as needed.
olcnectl module instances --environment-name cne1-ha-env
INFO[13/10/23 03:07:56] Starting local API server
INFO[13/10/23 03:07:57] Starting local API server
INSTANCE                MODULE       STATE
cne1-ha-control1:8090   node         installed
cne1-ha-helm            helm         created
cne1-ha-istio           istio        installed
cne1-ha-worker1:8090    node         installed
cne1-ha-worker2:8090    node         installed
grafana                 grafana      installed
prometheus              prometheus   installed
cne1-ha-cluster         kubernetes   installed
In the above sample output there is only one control plane node.
2. Add the second control plane node to the cluster using the olcnectl module update command as follows.
olcnectl module update \
--environment-name cne1-ha-env \
--name cne1-ha-cluster \
--control-plane-nodes cne1-ha-control1:8090,cne1-ha-control2:8090 \
--worker-nodes cne1-ha-worker1:8090,cne1-ha-worker2:8090
$ olcnectl module update \
> --environment-name cne1-ha-env \
> --name cne1-ha-cluster \
> --control-plane-nodes cne1-ha-control1:8090,cne1-ha-control2:8090 \
> --worker-nodes cne1-ha-worker1:8090,cne1-ha-worker2:8090
INFO[13/10/23 03:09:19] Starting local API server
? [WARNING] Update will shift your workload and some pods will lose data if they rely on local storage. Do you want to continue? (y/N) Yes
Taking backup of modules before update
Backup of modules succeeded.
Updating modules
Update successful
3. Verify that the new control plane node has been added by listing the module instances again. In the below command, change the environment name as needed.
olcnectl module instances --environment-name cne1-ha-env
[opc@cne1-ha-operator ~]$ olcnectl module instances --environment-name cne1-ha-env
INFO[13/10/23 03:13:37] Starting local API server
INFO[13/10/23 03:13:38] Starting local API server
INSTANCE                MODULE       STATE
cne1-ha-worker1:8090    node         installed
cne1-ha-worker2:8090    node         installed
cne1-ha-cluster         kubernetes   installed
cne1-ha-control1:8090   node         installed
cne1-ha-istio           istio        installed
cne1-ha-helm            helm         created
grafana                 grafana      installed
prometheus              prometheus   installed
cne1-ha-control2:8090   node         installed
[opc@cne1-ha-operator ~]$
4. Now check whether the kube-system pods are running on the newly added node using the below command on control plane node 1. Note that kubectl is not configured on the operator node, so the command must be run from a control plane node (as shown in the output below).
kubectl get pods -n kube-system
[opc@cne1-ha-operator ~]$ kubectl get pods -n kube-system
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[opc@cne1-ha-operator ~]$
[opc@cne1-ha-operator ~]$ ssh cne1-ha-control1
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Mon Oct 2 18:58:46 2023 from 10.0.1.247
[opc@cne1-ha-control1 ~]$
[opc@cne1-ha-control1 ~]$ kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS       AGE
coredns-6fdffc7bfc-tzr9p                    1/1     Running   10             113d
coredns-6fdffc7bfc-wrws4                    1/1     Running   2              10d
etcd-cne1-ha-control1                       1/1     Running   15             195d
etcd-cne1-ha-control2                       1/1     Running   15             2d12h
kube-apiserver-cne1-ha-control1             1/1     Running   20             195d
kube-apiserver-cne1-ha-control2             1/1     Running   21             2d12h
kube-controller-manager-cne1-ha-control1    1/1     Running   21             195d
kube-controller-manager-cne1-ha-control2    1/1     Running   15             2d12h
kube-flannel-ds-8ttpr                       1/1     Running   21             140d
kube-flannel-ds-r7pgc                       1/1     Running   20 (15m ago)   140d
kube-flannel-ds-rvc7l                       1/1     Running   20             140d
kube-flannel-ds-wlqzf                       1/1     Running   3 (15m ago)    2d12h
kube-proxy-9hdzl                            1/1     Running   15             195d
kube-proxy-hqx5n                            1/1     Running   15             195d
kube-proxy-t5ckp                            1/1     Running   15             195d
kube-proxy-xz9t5                            1/1     Running   1              2d12h
kube-scheduler-cne1-ha-control1             1/1     Running   18             195d
kube-scheduler-cne1-ha-control2             1/1     Running   16             2d12h
metrics-server-77dfc8475-qskxw              1/1     Running   13             134d
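The newly added control plane node should also report a Ready status in the node list; a quick check, run from a control plane node:
kubectl get nodes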
The below command can be used to list the module instances in an OCNE environment and their states.
olcnectl module instances --environment-name=<env name>
Below is sample output.
INFO[16/11/22 16:35:26] Starting local API server
INSTANCE                MODULE       STATE
control.Test.com:8090   node         installed
ocne-cluster            kubernetes   installed
worker1.Test.com:8090   node         installed
worker2.Test.com:8090   node         installed
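If the command is run where the environment configuration is not cached locally, the API server can be specified explicitly with the --api-server flag shown earlier in this document, for example (placeholders are illustrative):
olcnectl module instances \
  --api-server <operator-host>:8091 \
  --environment-name <env name>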